Optimize Power Allocation Scheme to Maximize Sum Rate in CoMP with Limited Channel State Information
The extensive use of mobile applications poses many challenges in cellular systems, such as cell-edge throughput, inter-cell interference, and spectral efficiency. Many of these challenges have been addressed to a great extent using Coordinated Multi-Point (CoMP), developed in the Third Generation Partnership Project (3GPP) for LTE-Advanced. In CoMP, base stations (BSs) connected to multiple terminals (user equipments, UEs) cooperatively process signals at both transmission and reception. CoMP improves throughput, reduces or even removes inter-cell interference, and increases spectral efficiency in the downlink of multi-antenna coordinated multipoint systems. Many researchers have addressed these issues assuming that the BSs have knowledge of the common control channels dedicated to all UEs, as well as full or partial channel state information (CSI) for all links. With CSI available at the BSs, multiuser interference can be managed at the BSs. To make this feasible, the UEs are responsible for collecting downlink CSI. However, CSI measurement (instantaneous and/or statistical) is inherently imperfect because the channels vary randomly over time. The resulting inaccurate CSI at the BSs may, in turn, create multi-user interference. Among the many techniques for suppressing multi-user interference, feedback schemes have been gaining considerable attention. In feedback schemes, CSI is fed back from the UEs to the base station in the uplink. The question naturally arises as to the type and amount of feedback to be used. Research has been progressing on this front, and several feedback techniques have been proposed. Three basic CoMP feedback schemes are available. The first is explicit (statistical) channel information feedback, in which channel information such as the channel's covariance matrix is shared between the transmitter and receiver. The second is implicit channel information feedback, which carries indicators such as the channel quality indicator (CQI), precoding matrix indicator (PMI), or rank indicator (RI). The first is applied in TDD LTE-type systems, while the second type of feedback can be applied in FDD systems. Finally, the UE can transmit a sounding reference signal (SRS); this type of feedback scheme exploits channel reciprocity, reduces inter-cell interference, and can be applied in TDD systems. We have analyzed an LTE TDD based system. After this, power optimization is also required, because users at the cell edge require more attention than users located at the center of the cell. Our results show that the estimated power allocation achieves exponential diversity at high as well as low SNR.
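The abstract does not spell out the power-allocation scheme itself, so as an illustrative sketch only, the classical water-filling solution to the sum-rate maximization over parallel channels can be written as follows (the channel gains, total power budget, and bisection tolerance below are all assumptions for the example):

```python
import numpy as np

def water_filling(channel_gains, total_power):
    """Classical water-filling: allocate power across parallel channels to
    maximize sum_i log2(1 + g_i * p_i) subject to sum_i p_i = total_power."""
    g = np.asarray(channel_gains, dtype=float)
    inv = 1.0 / g                          # per-channel "floor" (noise / gain)
    lo, hi = inv.min(), inv.max() + total_power
    for _ in range(100):                   # bisection on the water level mu
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

# Example: a strong centre user (gain 4.0) and a weak cell-edge user (gain 0.5).
gains = np.array([4.0, 0.5])
p = water_filling(gains, total_power=2.0)
sum_rate = float(np.sum(np.log2(1.0 + gains * p)))
```

Note that plain water-filling pours more power onto the *stronger* channel; schemes that prioritize cell-edge users, as discussed above, would instead add a fairness constraint on top of this baseline.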
In this method, a compressed feedback scheme is analyzed to provide multi-cell spatial channel information, improving feedback efficiency and throughput. The rows and columns of the channel matrix are compressed using the user's eigenmode and the codebook-based scheme specified in the LTE specification. The main drawback of this scheme is that the spectral efficiency is achieved at the cost of increased feedback overhead at the evolved NodeB (eNB). Another factor is the complexity of the eNodeB, which is to be addressed in future work.
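As a minimal sketch of the eigenmode-plus-codebook idea (not the actual LTE codebook, whose entries are defined in the 3GPP specification): the UE computes the channel's dominant eigenmode and feeds back only the index of the nearest codeword. The 4-entry DFT codebook below is a toy stand-in:

```python
import numpy as np

def dominant_eigenmode(H):
    """Principal right singular vector of H: the transmit direction that
    captures the most channel energy (the user's dominant eigenmode)."""
    _, _, Vh = np.linalg.svd(H)
    return Vh[0].conj()

def quantize_to_codebook(v, codebook):
    """Pick the codeword best aligned with v (largest |<codeword, v>|);
    only this index needs to be fed back, compressing the channel matrix."""
    return int(np.argmax(np.abs(codebook.conj() @ v)))

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
v = dominant_eigenmode(H)

# Toy 4-codeword DFT codebook over 4 transmit antennas (illustrative only).
n = np.arange(4)
codebook = np.exp(-2j * np.pi * np.outer(n, n) / 4) / 2.0
idx = quantize_to_codebook(v, codebook)
```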
A Novel Beamformed Control Channel Design for LTE with Full Dimension-MIMO
The Full Dimension-MIMO (FD-MIMO) technology is capable of achieving huge
improvements in network throughput with simultaneous connectivity of a large
number of mobile wireless devices, unmanned aerial vehicles, and the Internet
of Things (IoT). In FD-MIMO, with a large number of antennae at the base
station and the ability to perform beamforming, the capacity of the physical
downlink shared channel (PDSCH) has increased significantly. However, the current
specifications of the 3rd Generation Partnership Project (3GPP) do not allow
the base station to perform beamforming techniques for the physical downlink
control channel (PDCCH), and hence, PDCCH has neither the capacity nor the
coverage of PDSCH. Therefore, PDCCH capacity will still limit the performance
of a network as it dictates the number of users that can be scheduled at a
given time instant. In Release 11, 3GPP introduced enhanced PDCCH (EPDCCH) to
increase the PDCCH capacity at the cost of sacrificing the PDSCH resources. The
problem of enhancing the PDCCH capacity within the available control channel
resources has not been addressed yet in the literature. Hence, in this paper,
we propose a novel beamformed PDCCH (BF-PDCCH) design which is aligned to the
3GPP specifications and requires simple software changes at the base station.
We rely on the sounding reference signals transmitted in the uplink to decide
the best beam for a user and ingeniously schedule the users in PDCCH. We
perform system level simulations to evaluate the performance of the proposed
design and show that the proposed BF-PDCCH achieves larger network throughput
when compared with the current state-of-the-art PDCCH and EPDCCH schemes.
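The core beam-selection step relied on here, choosing a user's best beam from its uplink SRS under TDD reciprocity, can be sketched as below. The 8-antenna array, DFT beam grid, and channel construction are illustrative assumptions, not the 3GPP beam definitions:

```python
import numpy as np

def best_beam_from_srs(h_uplink, beam_codebook):
    """Pick the beam with the largest array gain |b^H h|. With TDD
    reciprocity, the uplink SRS channel estimate stands in for the
    downlink channel, so the same beam can serve the PDCCH."""
    gains = np.abs(beam_codebook.conj() @ h_uplink)
    return int(np.argmax(gains)), float(gains.max())

# Toy setup: 8-antenna base station with an orthogonal DFT beam grid.
M = 8
codebook = np.exp(2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M)
codebook /= np.sqrt(M)                 # unit-norm beams

# A user whose channel points exactly along beam 3 of the grid.
h = codebook[3] * np.sqrt(M)
idx, gain = best_beam_from_srs(h, codebook)
```

Scheduling users in PDCCH then amounts to grouping users whose selected beam indices differ, so their control transmissions stay spatially separable.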
Implementation of PDSCH Receiver and CSI-RS for 5G-NR
The PDSCH (Physical Downlink Shared Channel) is the data-bearing channel in 5G-NR. In order to decode data, it needs the DCI (Downlink Control Information) carried by the PDCCH (Physical Downlink Control Channel). The PDSCH uses an LDPC (Low-Density Parity-Check) code, an error-correcting code, to encode the data. The main processing blocks of the PDSCH are rate matching and scrambling, followed by modulation; NR supports modulation up to 256-QAM (Quadrature Amplitude Modulation). The AWGN and TDL-C (delay spread) channel models defined in the 3GPP specifications are used for the simulations. The channel estimator used for the PDSCH is least squares, followed by tone averaging and linear interpolation in time. The block error rate performance depends strongly on the channel quality between the base station and the receiver. To acquire the channel quality, NR specifies a special type of cell-specific reference signal (CSI-RS) that can be configured for transmission on up to 32 antenna ports. The CSI-RS resources are code, frequency, and time division multiplexed. After passing through the channel, the CSI receiver estimates the channel and finds the suitable rank, the precoding matrix to be used, and the MCS, and feeds them back to the transmitter, which is free to use the recommendation given by the UE or to follow its own. In either case, the base station signals the parameters it actually used back to the UE.
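The estimator chain described above (least squares on pilot tones, tone averaging across subcarriers, linear interpolation in time) can be sketched as follows. The pilot layout, window size, and the noiseless flat channel are assumptions for the example, not the NR reference-signal pattern:

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Least-squares estimate on pilot tones: H_hat = Y / X."""
    return y_pilot / x_pilot

def tone_average(h_hat, window=3):
    """Moving average across adjacent subcarriers to suppress noise."""
    kernel = np.ones(window) / window
    return np.convolve(h_hat, kernel, mode="same")

def interpolate_in_time(h_t0, h_t1, n_symbols):
    """Linear interpolation between two pilot-bearing OFDM symbols."""
    w = np.linspace(0.0, 1.0, n_symbols)[:, None]
    return (1.0 - w) * h_t0[None, :] + w * h_t1[None, :]

# Toy example: flat channel h = 2+1j, QPSK pilots, no noise, 12 pilot tones.
x = np.exp(1j * np.pi / 4) * np.ones(12)
h_true = (2.0 + 1.0j) * np.ones(12)
y = h_true * x
h_hat = tone_average(ls_estimate(y, x))
grid = interpolate_in_time(h_hat, h_hat, n_symbols=14)   # 14-symbol slot
```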
Graph Neural Networks-Based User Pairing in Wireless Communication Systems
Recently, deep neural networks have emerged as a solution to solve NP-hard
wireless resource allocation problems in real-time. However, multi-layer
perceptron (MLP) and convolutional neural network (CNN) structures, which are
inherited from image processing tasks, are not optimized for wireless network
problems. As network size increases, these methods get harder to train and
generalize. User pairing is one such essential NP-hard optimization problem in
wireless communication systems that entails selecting users to be scheduled
together while minimizing interference and maximizing throughput. In this
paper, we propose an unsupervised graph neural network (GNN) approach to
efficiently solve the user pairing problem. Our proposed method utilizes the
Erdős Goes Neural pipeline to significantly outperform other scheduling methods
such as k-means and semi-orthogonal user scheduling (SUS). At 20 dB SNR, our
proposed approach achieves a 49% better sum rate than k-means and a staggering
95% better sum rate than SUS while consuming minimal time and resources. The
scalability of the proposed method is also explored as our model can handle
dynamic changes in network size without experiencing a substantial decrease in
performance. Moreover, our model can accomplish this without being explicitly
trained for larger or smaller networks facilitating a dynamic functionality
that cannot be achieved using CNNs or MLPs.
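The GNN pipeline itself is beyond a short sketch, but the classical heuristic such learned schedulers are compared against can be illustrated: greedily pair the remaining users whose channels are most orthogonal (smallest normalized correlation), so paired users interfere least. The channel dimensions below are assumptions for the example:

```python
import numpy as np

def pair_users_greedy(H):
    """Greedy pairing baseline: repeatedly pair the two remaining users with
    the most orthogonal channel vectors (smallest normalized correlation)."""
    remaining = list(range(H.shape[0]))
    pairs = []
    while len(remaining) >= 2:
        best = None
        for a in range(len(remaining)):
            for b in range(a + 1, len(remaining)):
                i, j = remaining[a], remaining[b]
                corr = abs(np.vdot(H[i], H[j])) / (
                    np.linalg.norm(H[i]) * np.linalg.norm(H[j]))
                if best is None or corr < best[0]:
                    best = (corr, i, j)
        _, i, j = best
        pairs.append((i, j))
        remaining.remove(i)
        remaining.remove(j)
    return pairs

# Toy scenario: 4 users, 8 base-station antennas, Rayleigh-like channels.
rng = np.random.default_rng(1)
H = rng.standard_normal((4, 8)) + 1j * rng.standard_normal((4, 8))
pairs = pair_users_greedy(H)
```

The O(n^3) pair search above is exactly the kind of combinatorial cost that motivates learning the pairing decision with a GNN instead.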
Streaming Video QoE Modeling and Prediction: A Long Short-Term Memory Approach
HTTP based adaptive video streaming has become a popular choice of streaming
due to the reliable transmission and the flexibility offered to adapt to
varying network conditions. However, due to rate adaptation in adaptive
streaming, the quality of the videos at the client keeps varying with time
depending on the end-to-end network conditions. Further, varying network
conditions can lead to the video client running out of playback content
resulting in rebuffering events. These factors affect the user satisfaction and
cause degradation of the user quality of experience (QoE). It is important to
quantify the perceptual QoE of the streaming video users and monitor the same
in a continuous manner so that the QoE degradation can be minimized. However,
the continuous evaluation of QoE is challenging as it is determined by complex
dynamic interactions among the QoE influencing factors. Towards this end, we
present LSTM-QoE, a recurrent neural network based QoE prediction model using a
Long Short-Term Memory (LSTM) network. The LSTM-QoE is a network of cascaded
LSTM blocks to capture the nonlinearities and the complex temporal dependencies
involved in the time varying QoE. Based on an evaluation over several publicly
available continuous QoE databases, we demonstrate that the LSTM-QoE has the
capability to model the QoE dynamics effectively. We compare the proposed model
with the state-of-the-art QoE prediction models and show that it provides
superior performance across these databases. Further, we discuss the state
space perspective for the LSTM-QoE and show the efficacy of the state space
modeling approaches for QoE prediction.
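As a minimal illustration of the recurrent building block cascaded in such a model, a single LSTM step over a per-time-step feature vector can be written in numpy. The feature set, hidden size, and random weights below are assumptions for the sketch, not the trained LSTM-QoE parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; the four gates are slices of a stacked pre-activation.
    A QoE model runs such steps over the streaming-session feature sequence."""
    d = h.size
    z = W @ x + U @ h + b
    i = sigmoid(z[0*d:1*d])        # input gate
    f = sigmoid(z[1*d:2*d])        # forget gate
    o = sigmoid(z[2*d:3*d])        # output gate
    g = np.tanh(z[3*d:4*d])        # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy model: 3 input features (e.g. quality, rebuffering flag, time since
# last rebuffer) and a hidden state of size 2.
rng = np.random.default_rng(0)
d, k = 2, 3
W = rng.standard_normal((4 * d, k))
U = rng.standard_normal((4 * d, d))
b = np.zeros(4 * d)

h = c = np.zeros(d)
for x in rng.standard_normal((5, k)):   # five time steps of features
    h, c = lstm_step(x, h, c, W, U, b)
```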
Modeling Continuous Video QoE Evolution: A State Space Approach
A rapid increase in the video traffic together with an increasing demand for
higher quality videos has put a significant load on content delivery networks
in the recent years. Due to the relatively limited delivery infrastructure, the
video users in HTTP streaming often encounter dynamically varying quality over
time due to rate adaptation, while the delays in video packet arrivals result
in rebuffering events. The user quality-of-experience (QoE) degrades and varies
with time because of these factors. Thus, it is imperative to monitor the QoE
continuously in order to minimize these degradations and deliver an optimized
QoE to the users. Towards this end, we propose a nonlinear state space model
for efficiently and effectively predicting the user QoE on a continuous time
basis. The QoE prediction using the proposed approach relies on a state space
that is defined by a set of carefully chosen time varying QoE determining
features. An evaluation of the proposed approach conducted on two publicly
available continuous QoE databases shows a superior QoE prediction performance
over the state-of-the-art QoE modeling approaches. The evaluation results also
demonstrate the efficacy of the selected features and the model order employed
for predicting the QoE. Finally, we show that the proposed model is completely
state controllable and observable, so that the potential of state space
modeling approaches can be exploited for further improving QoE prediction.
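The shape of such a nonlinear state-space recursion can be sketched as follows; the tanh nonlinearity, dimensions, and random parameters are assumptions for illustration, not the fitted model:

```python
import numpy as np

def qoe_state_update(x, u, A, B):
    """One step of a toy nonlinear state-space QoE model:
        x[k+1] = tanh(A x[k] + B u[k]),    qoe[k] = C x[k].
    The tanh keeps the state bounded, mimicking QoE score saturation."""
    return np.tanh(A @ x + B @ u)

rng = np.random.default_rng(2)
n, m = 3, 2                        # state and input-feature dimensions
A = 0.5 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = np.ones((1, n)) / n            # read-out: average of the states

x = np.zeros(n)
qoe = []
for u in rng.standard_normal((10, m)):   # e.g. (quality, rebuffering) inputs
    x = qoe_state_update(x, u, A, B)
    qoe.append(float(C @ x))
```

Controllability and observability of the linearized (A, B, C) triple can then be checked with the standard rank tests, which is what licenses the state-space analysis mentioned above.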
FLEXCRAN: Cloud radio access network prototype using OpenAirInterface
In this demo, we describe the realization of a cloud radio access network (C-RAN) prototype using OpenAirInterface (OAI) software and commodity hardware. The deployment of centralized baseband processing on the remote cloud center (RCC) and of remote radio units (RRUs) connected over an Ethernet fronthaul is demonstrated. Further, the demo illustrates the flexibility of deploying several cellular radio access network protocol split architectures using OAI.
Meeting IMT 2030 Performance Targets: The Potential of OTFDM Waveform and Structural MIMO Technologies
The white paper focuses on several candidate technologies that could play a
crucial role in the development of 6G systems. Two of the key technologies
explored in detail are Orthogonal Time Frequency Division Multiplexing (OTFDM)
waveform and Structural MIMO (S-MIMO).
Method for accessing a channel in a wireless communication network
Embodiments herein disclose a method and a base station for accessing a channel of an unlicensed band in a wireless communication network. The method includes maintaining a plurality of virtual stations by the base station in the wireless communication network based on a value. Further, the method includes contending to access the channel using the plurality of virtual stations. Each virtual station in the plurality of virtual stations includes a contention window and a counter value.
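The contention mechanism described, multiple virtual stations each holding a contention window and a backoff counter, can be sketched as a slot-by-slot simulation. The contention window size, station count, and slot loop are assumptions for illustration, not values from the patent:

```python
import random

class VirtualStation:
    """One of the base station's virtual contenders: draws a backoff counter
    from its contention window and counts down on idle slots."""
    def __init__(self, cw):
        self.cw = cw
        self.counter = random.randint(0, cw - 1)

    def tick(self):
        """Advance one idle slot; return True when the counter reaches zero."""
        if self.counter == 0:
            return True
        self.counter -= 1
        return False

def slots_until_access(stations, max_slots=10_000):
    """Number of idle slots until ANY virtual station wins the channel."""
    for slot in range(max_slots):
        wins = [s.tick() for s in stations]   # every station decrements
        if any(wins):
            return slot
    return None

random.seed(0)
stations = [VirtualStation(cw=16) for _ in range(4)]
first = slots_until_access(stations)
```

Maintaining several virtual stations effectively gives the base station multiple independent draws from the contention window, raising its chance of an early win in each contention round.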
Joint Link Adaptation and Resource Allocation for Uplink in 3GPP Machine-Type Communications
Enhanced Machine-Type Communications (eMTC) is a low-power wide-area network technology introduced by the 3rd Generation Partnership Project (3GPP). The eMTC devices are of low cost and can operate in poor signal coverage areas. To ensure successful transmission in such scenarios, 3GPP has adopted the repetition of data over time. In 4G-LTE/5G-NR, the resource allocation and link adaptation are one-dimensional problems where the modulation and coding scheme (MCS) is either incremented or decremented over time. However, in eMTC, there exists a two-dimensional problem, as the base station can update either MCS or repetitions. Further, the base station has to consider the devices' limited transmit power while allocating resources in the uplink. An optimal selection of these values has a significant impact on the network capacity and the devices' battery power. Motivated by this, we present novel resource allocation, link adaptation, and scheduling algorithms for the eMTC uplink. We show that the proposed algorithms result in a suitable selection of resource blocks, MCS, repetition levels, and link adaptation. To reduce the computation load, we also propose a low-complexity sub-optimal scheduling algorithm. Through system-level simulations, we show that all the proposed algorithms significantly outperform the state-of-the-art algorithms. © 2022 IEEE
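The two-dimensional nature of the eMTC selection, trading MCS against repetition level, can be sketched as an exhaustive search. The MCS-versus-SNR table, the ideal 3 dB-per-doubling combining gain, and the efficiency metric below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def pick_mcs_and_repetitions(snr_db, mcs_snr_req, max_reps=128):
    """Exhaustive 2-D search over (MCS, repetitions): each repetition
    doubling is assumed to buy ~3 dB of combining gain; among the pairs the
    link can support, pick the one maximizing a simple efficiency proxy
    eff = (mcs + 1) / reps (higher MCS in fewer repetitions is better)."""
    best = None
    reps = 1
    while reps <= max_reps:
        gain_db = 10.0 * np.log10(reps)        # ideal combining gain
        for mcs, req in enumerate(mcs_snr_req):
            if snr_db + gain_db >= req:
                eff = (mcs + 1) / reps
                if best is None or eff > best[0]:
                    best = (eff, mcs, reps)
        reps *= 2
    return best

# Toy MCS table: required SNR grows ~2 dB per index (illustrative numbers).
table = [2.0 * i - 4.0 for i in range(10)]     # MCS 0..9 need -4..14 dB
result = pick_mcs_and_repetitions(snr_db=-8.0, mcs_snr_req=table)
```

Even this toy search shows why the problem is genuinely two-dimensional: at -8 dB the device cannot use any MCS without repetitions, yet piling on repetitions beyond the minimum wastes resources, so the best operating point sits at an interior (MCS, repetition) pair.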